Cortical integration of audio-visual speech and non-speech stimuli.

Authors

  • Brent C Vander Wyk
  • Gordon J Ramsay
  • Caitlin M Hudac
  • Warren Jones
  • David Lin
  • Ami Klin
  • Su Mei Lee
  • Kevin A Pelphrey
Abstract

Using fMRI, we investigated the neural basis of audio-visual processing of speech and non-speech stimuli using physically similar auditory stimuli (speech and sinusoidal tones) and visual stimuli (animated circles and ellipses). Relative to uni-modal stimuli, the different multi-modal stimuli showed increased activation in largely non-overlapping areas. Ellipse-Speech, which most resembles naturalistic audio-visual speech, showed greater activation in the right inferior frontal gyrus, the fusiform gyri, the left posterior superior temporal sulcus, and lateral occipital cortex. Circle-Tone, an arbitrary audio-visual pairing with no speech association, activated the middle temporal gyri and lateral occipital cortex. Circle-Speech showed activation in lateral occipital cortex, and Ellipse-Tone did not show increased activation relative to uni-modal stimuli. Further analysis revealed that middle temporal regions, although identified as multi-modal only in the Circle-Tone condition, were more strongly activated by Ellipse-Speech or Circle-Speech, whereas regions identified as multi-modal for Ellipse-Speech always responded most strongly to Ellipse-Speech. Our results suggest that combinations of auditory and visual stimuli may be processed together by different cortical networks, depending on the extent to which multi-modal speech or non-speech percepts are evoked.


Similar articles

Using functional magnetic resonance imaging (fMRI) to explore brain function: cortical representations of language critical areas

Pre-operative determination of the dominant hemisphere for speech, and of speech-associated sensory and motor regions, has long been of great interest to neurological surgeons. This determination is of utmost importance but difficult to achieve, requiring either invasive (Wada test) or non-invasive (brain mapping) methods. In the present study we have employed functional magnetic resonance imaging...

The effects of perceptual load and set on audio-visual speech integration

This study examines the hypothesis that audio-visual integration of speech requires both expectation to perceive speech and sufficient attentional resources to allow multimodal integration. Audio-visual integration was measured by recording susceptibility to the McGurk effect whilst participants simultaneously performed a primary visual task under conditions of high or low perceptual load. Acco...

Cortical operational synchrony during audio-visual speech integration.

Information from different sensory modalities is processed in different cortical regions. However, our daily perception is based on the overall impression resulting from the integration of information from multiple sensory modalities. At present it is not known how the human brain integrates information from different modalities into a unified percept. Using a robust phenomenon known as the McG...

Visual-auditory integration during speech imitation in autism.

Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional 'mirror neuron' systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a 'virtual' head (Baldi), delivered speech stimuli for identification in auditory, visual or bimodal co...


Journal:
  • Brain and Cognition

Volume: 74, Issue: 2

Pages: -

Publication year: 2010